
    Mild solutions of semilinear elliptic equations in Hilbert spaces

    This paper extends the theory of regular solutions ($C^1$ in a suitable sense) for a class of semilinear elliptic equations in Hilbert spaces. The notion of regularity is based on the concept of $G$-derivative, which is introduced and discussed. A result of existence and uniqueness of solutions is stated and proved under the assumption that the transition semigroup associated to the linear part of the equation has a smoothing property, that is, it maps continuous functions into $G$-differentiable ones. The validity of this smoothing assumption is fully discussed for the case of the Ornstein-Uhlenbeck transition semigroup and for the case of an invertible diffusion coefficient, covering cases not previously addressed in the literature. It is shown that the results apply to Hamilton-Jacobi-Bellman (HJB) equations associated to infinite horizon optimal stochastic control problems in infinite dimension and that, in particular, they cover examples of optimal boundary control of the heat equation that were not treatable with the approaches previously developed in the literature.
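
    For orientation, a generic form of the equations in this setting (the notation here is illustrative, not quoted from the paper) is the semilinear elliptic equation in a Hilbert space $H$
    \[
    \lambda v(x) - \tfrac{1}{2}\,\mathrm{Tr}\big[GG^{*}D^{2}v(x)\big] - \langle Ax, Dv(x)\rangle - F\big(x, D^{G}v(x)\big) = 0, \qquad x \in H,
    \]
    where $A$ generates a $C_0$-semigroup, $G$ is the diffusion operator, and the $G$-derivative $D^{G}v(x)$ is, roughly, the derivative of $v$ along directions of the form $Gh$, i.e. $D^{G}v(x)h = Dv(x)[Gh]$. The smoothing assumption then asks the transition semigroup of the linear part (e.g. the Ornstein-Uhlenbeck semigroup) to map bounded continuous functions into $G$-differentiable ones.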

    Verification Theorems for Stochastic Optimal Control Problems via a Time Dependent Fukushima-Dirichlet Decomposition

    This paper presents a method of proving verification theorems for stochastic optimal control of finite dimensional diffusion processes without control in the diffusion term. The value function is assumed to be continuous in time and once differentiable in the space variable ($C^{0,1}$) instead of once differentiable in time and twice in space ($C^{1,2}$), as in the classical results. The results are obtained using a time dependent Fukushima-Dirichlet decomposition proved in a companion paper by the same authors using stochastic calculus via regularization. Applications, examples and comparisons with other similar results are also given.
    Comment: 34 pages. To appear in Stochastic Processes and their Applications.
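
    As a rough sketch of the setting (notation mine, not the paper's), the controlled state is a finite dimensional diffusion whose noise coefficient does not depend on the control,
    \[
    dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t)\,dW_t ,
    \]
    and the verification problem is to conclude optimality of a candidate feedback from a solution of the associated HJB equation that is only $C^{0,1}$. Since $\sigma$ does not depend on $u$, only the first spatial derivative of the value function enters the Hamiltonian, which is why $C^{0,1}$ regularity is the natural threshold here.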

    Stochastic Optimal Control with Delay in the Control: solution through partial smoothing

    Stochastic optimal control problems governed by delay equations with delay in the control are usually more difficult to study than the ones in which the delay appears only in the state. This is particularly true when we look at the associated Hamilton-Jacobi-Bellman (HJB) equation. Indeed, even in the simplified setting (introduced first by Vinter and Kwong for the deterministic case) the HJB equation is an infinite dimensional second order semilinear Partial Differential Equation (PDE) that does not satisfy the so-called "structure condition", which roughly means that "the noise enters the system with the control." The absence of such a condition, together with the lack of smoothing properties which is a common feature of problems with delay, prevents the use of the known techniques (based on Backward Stochastic Differential Equations (BSDEs) or on the smoothing properties of the linear part) to prove the existence of regular solutions of this HJB equation, and so no results in this direction have been proved so far. In this paper we provide a result on the existence of regular solutions of such HJB equations and we use it to solve the corresponding control problem completely, finding optimal feedback controls also in the more difficult case of pointwise delay. The main tool is a partial smoothing property that we prove for the transition semigroup associated to the uncontrolled problem. Such a result holds for a specific class of equations and data which arises naturally in many applied problems.
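
    A prototypical state equation of this type (generic form, not quoted from the paper) lets the control act both instantaneously and through its past values,
    \[
    dx(t) = \Big[a\,x(t) + b_0\,u(t) + \int_{-d}^{0} u(t+\xi)\, b_1(d\xi)\Big]dt + \sigma\, dW(t),
    \]
    where $b_1$ may be a measure on $[-d,0]$ (the pointwise delay case corresponds to $b_1$ concentrated at $-d$). After the Vinter-Kwong reformulation the state becomes infinite dimensional, and since the control enters through $b_0$ and $b_1$ while the noise enters through $\sigma$, the range of the control operator need not be contained in that of the noise operator: this is the failure of the structure condition mentioned above.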

    Weak Dirichlet processes with a stochastic control perspective

    The motivation of this paper is to prove verification theorems for stochastic optimal control of finite dimensional diffusion processes without control in the diffusion term, in the case that the value function is assumed to be continuous in time and once differentiable in the space variable ($C^{0,1}$) instead of once differentiable in time and twice in space ($C^{1,2}$), as in the classical results. For this purpose, the replacement tool for the Itô formula will be the Fukushima-Dirichlet decomposition for weak Dirichlet processes. Given a fixed filtration, a weak Dirichlet process is the sum of a local martingale $M$ plus an adapted process $A$ which is orthogonal, in the sense of covariation, to any continuous local martingale. The mentioned decomposition states that a $C^{0,1}$ function of a weak Dirichlet process with finite quadratic variation is again a weak Dirichlet process. That result is established in this paper and applied to the strong solution of a Cauchy problem with final condition. Applications to the proof of verification theorems will be addressed in a companion paper.
    Comment: 22 pages. To appear in Stochastic Processes and their Applications.
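
    For the reader's convenience, the definition can be phrased as follows (a standard formulation, with the filtration fixed): $X$ is a weak Dirichlet process if it admits a decomposition
    \[
    X = M + A, \qquad [A, N] = 0 \ \text{for every continuous local martingale } N,
    \]
    where $M$ is a local martingale, $A$ is adapted and $[\,\cdot\,,\cdot\,]$ denotes the covariation. The decomposition result recalled above then says that if $u \in C^{0,1}$ and $X$ is a weak Dirichlet process with finite quadratic variation, the process $u(t, X_t)$ is again a weak Dirichlet process.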

    Optimal investment models with vintage capital: Dynamic Programming approach

    The Dynamic Programming approach is developed here for a family of optimal investment models with vintage capital. The problem falls into the class of infinite horizon optimal control problems of PDEs with age structure that have been studied in various papers (see e.g. [11, 12], [30, 32]), either in cases when explicit solutions can be found or using Maximum Principle techniques. The problem is rephrased in an infinite dimensional setting; it is proven that the value function is the unique regular solution of the associated stationary Hamilton-Jacobi-Bellman equation, and existence and uniqueness of optimal feedback controls is derived. It is then shown that the optimal path is the solution to the closed loop equation. Similar results were proven in the case of finite horizon in [26, 27]. The case of infinite horizon is more challenging as a mathematical problem, and indeed more interesting from the point of view of optimal investment models with vintage capital, where what mainly matters is the behavior of optimal trajectories and controls in the long run. The study of the infinite horizon case is performed through a nontrivial limiting procedure from the corresponding finite horizon problems.
    Keywords: optimal investment, vintage capital, age-structured systems, optimal control, dynamic programming, Hamilton-Jacobi-Bellman equations, linear convex control, boundary control.
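
    For context, a typical age-structured capital accumulation equation in this family (generic form, symbols illustrative) reads
    \[
    \partial_t y(t,s) + \partial_s y(t,s) = -\mu(s)\, y(t,s), \qquad y(t,0) = u(t),
    \]
    where $y(t,s)$ is the stock of capital goods of age $s$ at time $t$, $\mu$ is an age-dependent depreciation rate, and the investment in new capital $u(t)$ acts as a boundary control at age $s = 0$. Rewriting this first-order PDE as an abstract evolution equation in a Hilbert space of functions of the age variable yields the infinite dimensional control problem treated by dynamic programming.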

    On controlled linear diffusions with delay in a model of optimal advertising under uncertainty with memory effects

    We consider a class of dynamic advertising problems under uncertainty in the presence of carryover and distributed forgetting effects, generalizing the classical model of Nerlove and Arrow. In particular, we allow the dynamics of the product goodwill to depend on its past values, as well as on previous advertising levels. Building on previous work of two of the authors, the optimal advertising model is formulated as an infinite dimensional stochastic control problem. We obtain (partial) regularity as well as approximation results for the corresponding value function. Under specific structural assumptions we study the effects of delays on the value function and the optimal strategy. In the absence of carryover effects, the value function and the optimal advertising policy can be characterized in terms of the solution of the associated HJB equation, and we obtain sharper characterizations of the optimal policy.
    Comment: numerical example added; minor revision.
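
    The delay generalization of the Nerlove-Arrow dynamics can be sketched as follows (generic form, not quoted from the paper): the goodwill $y$ evolves as
    \[
    dy(t) = \Big[a_0\, y(t) + \int_{-r}^{0} y(t+\xi)\, a_1(d\xi) + b_0\, u(t) + \int_{-r}^{0} u(t+\xi)\, b_1(d\xi)\Big]dt + \sigma\, dW(t),
    \]
    where $u$ is the advertising effort, the kernels $a_1$ and $b_1$ model distributed forgetting and advertising carryover respectively, and setting $a_1 = b_1 = 0$ recovers the memoryless model.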

    Utility maximization with current utility on the wealth: regularity of solutions to the HJB equation

    We consider a utility maximization problem for an investment-consumption portfolio when the current utility depends also on the wealth process. Problems of this kind arise, e.g., in portfolio optimization with random horizon or with random trading times. To overcome the difficulties of the problem we use the dual approach: we define a dual problem and treat it by means of dynamic programming, showing that the viscosity solutions of the associated Hamilton-Jacobi-Bellman equation belong to a suitable class of smooth functions. This allows us to define a smooth solution of the primal Hamilton-Jacobi-Bellman equation and to prove that this solution is indeed unique in a suitable class and coincides with the value function of the primal problem. Some financial applications of the results are provided.
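
    Schematically (notation illustrative, not taken from the paper), the primal problem has the form
    \[
    V(x) = \sup_{(c,\pi)} \ \mathbb{E}\left[\int_0^{\infty} e^{-\rho t}\big( U_1(c_t) + U_2(X^{x,c,\pi}_t)\big)\,dt\right],
    \]
    where $X^{x,c,\pi}$ is the wealth process generated by the consumption $c$ and the portfolio $\pi$ from initial wealth $x$; the term $U_2$ is the "current utility on the wealth" of the title. The dual problem is then built, in the usual spirit of duality, from the convex conjugates of the utilities, and its HJB equation is the one whose viscosity solutions are shown to be smooth.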